Learning Discriminative Feature Transforms to Low Dimensions in Low Dimensions
Abstract
The marriage of Renyi entropy with Parzen density estimation has been shown to be a viable tool in learning discriminative feature transforms. However, it suffers from a computational complexity proportional to the square of the number of samples in the training data. This sets a practical limit to using large databases. We suggest an immediate divorce of the two methods and a remarriage of Renyi entropy with a semi-parametric density estimation method, such as Gaussian Mixture Models (GMMs). This allows all of the computation to take place in the low-dimensional target space, and it reduces the computational complexity to be proportional to the square of the number of components in the mixtures. Furthermore, a convenient extension to Hidden Markov Models, as commonly used in speech recognition, becomes possible.
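To see where the claimed saving comes from, recall that the quadratic Renyi entropy is H2(p) = -log of the integral of p(x)^2 dx, and that for a Gaussian mixture this integral has a closed form, because the integral of a product of two Gaussians is a Gaussian evaluated at one mean with the two covariances summed. The sketch below only illustrates that closed form under these assumptions; it is not the authors' full training procedure, and the function and variable names are ours.

```python
# Sketch: closed-form quadratic Renyi entropy of a GMM fitted in the
# low-dimensional target space. The double loop runs over the M mixture
# components, so the cost is O(M^2) rather than O(N^2) in the sample count.
import numpy as np
from scipy.stats import multivariate_normal

def renyi_quadratic_entropy_gmm(weights, means, covs):
    # H2(p) = -log sum_i sum_j w_i w_j N(mu_i; mu_j, S_i + S_j),
    # using: integral N(x; mu_i, S_i) N(x; mu_j, S_j) dx = N(mu_i; mu_j, S_i + S_j).
    information_potential = 0.0
    for wi, mi, si in zip(weights, means, covs):
        for wj, mj, sj in zip(weights, means, covs):
            information_potential += wi * wj * multivariate_normal.pdf(
                mi, mean=mj, cov=si + sj)
    return -np.log(information_potential)

# Toy usage: a two-component mixture in a 2-D target space.
w = np.array([0.5, 0.5])
mu = np.array([[0.0, 0.0], [3.0, 0.0]])
S = [np.eye(2), np.eye(2)]
print(renyi_quadratic_entropy_gmm(w, mu, S))
```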
Similar Papers
Two feature transform methods based on genetic algorithms for reducing support vector machine classification error
Discriminative methods are used to increase pattern recognition and classification accuracy. These methods can be applied as discriminant transformations of the features, or as discriminative learning algorithms for the classifiers. Usually, the criteria of discriminative transformations differ from the criteria used to train discriminant classifiers or from their error. In this ...
Feature learning via partial differential equation with applications to face recognition
Feature learning is a critical step in pattern recognition, such as image classification. However, most of the existing methods cannot extract features that are discriminative and at the same time invariant under some transforms. This limits the classification performance, especially in the case of small training sets. To address this issue, in this paper we propose a novel Partial Differential...
Visual Learning by Feature Combination and Feature Construction
In developing a visual learning method, the selection of features strongly affects the performance of the method. However, the optimal features generally depend on the learning task. Therefore, for effective learning it is necessary to find the optimal features according to the learning task. In this paper, we propose two new types of visual learning methods: the feature combination method and t...
Feature Extraction by Non-Parametric Mutual Information Maximization
We present a method for learning discriminative feature transforms using as criterion the mutual information between class labels and transformed features. Instead of a commonly used mutual information measure based on Kullback-Leibler divergence, we use a quadratic divergence measure, which allows us to make an efficient non-parametric implementation and requires no prior assumptions about cla...
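For contrast, the non-parametric route referred to here and in the abstract above places a Parzen window on every transformed training sample, so the same kind of information potential becomes a double sum over all sample pairs and the cost grows with the square of the number of samples. The sketch below illustrates only that pairwise structure, not the cited method itself (which additionally involves class labels and a quadratic divergence measure); the names are ours.

```python
# Sketch: Parzen-window estimate of the information potential
# integral p(x)^2 dx over N transformed samples. The double sum over
# sample pairs is what makes the non-parametric approach O(N^2).
import numpy as np
from scipy.stats import multivariate_normal

def parzen_information_potential(X, sigma):
    # p_hat(x) = (1/N) sum_j N(x; x_j, sigma^2 I); integrating p_hat^2 gives
    # (1/N^2) sum_i sum_j N(x_i; x_j, 2 sigma^2 I) (kernel self-convolution).
    n, d = X.shape
    cov = 2.0 * sigma ** 2 * np.eye(d)
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += multivariate_normal.pdf(X[i], mean=X[j], cov=cov)
    return total / (n * n)

# Toy usage on 50 random 2-D points.
X = np.random.default_rng(0).normal(size=(50, 2))
print(parzen_information_potential(X, sigma=0.5))
```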